# Korean optimization
## A.X 4.0 Light Gguf
A.X 4.0 Light is a lightweight large language model developed by SKT AI Model Lab, built on Qwen2.5 and optimized for Korean understanding and enterprise deployment.
- Author: mykor · License: Apache-2.0 · Downloads: 535 · Likes: 2
- Tags: Large Language Model, Transformers, Supports Multiple Languages
## Hyperclovax SEED Text Instruct 0.5B
A Korean-optimized text-generation model with instruction-following capability, featuring a lightweight design suited to edge-device deployment.
- Author: naver-hyperclovax · License: Other · Downloads: 7,531 · Likes: 60
- Tags: Large Language Model, Transformers
## 3b Ko Ft Research Release Q4 K M GGUF
A 3B-parameter language model optimized for Korean, converted to GGUF format for compatibility with llama.cpp.
- Author: freddyaboulton · License: Apache-2.0 · Downloads: 165 · Likes: 0
- Tags: Large Language Model, Korean
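Several models in this list ship in GGUF format for use with llama.cpp. A minimal sketch of checking a file's GGUF header before handing it to llama.cpp tooling — per the GGUF specification, a file begins with the 4-byte ASCII magic `GGUF` followed by a little-endian uint32 format version (the file name `demo.gguf` here is just a stand-in):

```python
import struct

def is_gguf(path: str) -> bool:
    """Return True if the file carries a plausible GGUF header."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            return False
        # Version is a little-endian unsigned 32-bit integer.
        version = struct.unpack("<I", f.read(4))[0]
    return version >= 1

# Write a stand-in 8-byte header to demonstrate (real models are
# multi-gigabyte files with tensors and metadata after the header).
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))

print(is_gguf("demo.gguf"))  # True
```

This is only a sanity check on the header, not a full parser; llama.cpp itself validates the complete metadata and tensor layout when loading.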
## Llama DNA 1.0 8B Instruct
A state-of-the-art bilingual language model based on the Llama architecture, specially optimized for Korean understanding and generation while maintaining strong English capabilities.
- Author: dnotitia · Downloads: 661 · Likes: 58
- Tags: Large Language Model, Transformers, Supports Multiple Languages
## Bge Reranker V2 M3 Ko
A Korean reranking model based on BAAI/bge-reranker-v2-m3, used primarily for text reranking tasks.
- Author: dragonkue · License: Apache-2.0 · Downloads: 877 · Likes: 6
- Tags: Text Embedding, Supports Multiple Languages
## Llama VARCO 8B Instruct
A generative model built on Llama; additional training gives it strong Korean processing while maintaining English proficiency.
- Author: NCSOFT · Downloads: 2,981 · Likes: 74
- Tags: Large Language Model, Transformers, Supports Multiple Languages
## Ko Llama 3 8B Instruct
A model developed specifically to improve Korean language performance, produced by supervised fine-tuning of Meta-Llama-3-8B-Instruct.
- Author: davidkim205 · Downloads: 140 · Likes: 8
- Tags: Large Language Model, Transformers, Supports Multiple Languages
## Hkcode Solar Youtube Merged
A Korean language model further pretrained from SOLAR-10.7B, developed by the Fintech Department of Korea Polytechnics.
- Author: hyokwan · License: MIT · Downloads: 3,638 · Likes: 1
- Tags: Large Language Model, Transformers, Korean
## Akallama Llama3 70b V0.1 GGUF
AkaLlama is a Korean large language model fine-tuned from Meta-Llama-3-70b-Instruct, focused on practical multi-task applications.
- Author: mirlab · License: Other · Downloads: 414 · Likes: 15
- Tags: Large Language Model, Supports Multiple Languages
## Ko Llama3 Luxia 8B
A Korean-optimized large language model developed by Saltlux AI Lab, based on Meta Llama-3-8B, featuring an extended Korean tokenizer and pre-trained on 100GB of curated Korean data.
- Author: saltlux · Downloads: 2,127 · Likes: 78
- Tags: Large Language Model, Transformers, Supports Multiple Languages
## Llama 3 Open Ko 8B Gguf
A Korean language model continually pre-trained from Llama-3-8B on over 60GB of deduplicated text.
- Author: teddylee777 · Downloads: 7,211 · Likes: 47
- Tags: Large Language Model, Supports Multiple Languages
## Llama 3 Open Ko 8B Instruct Preview
A Korean language model based on continued pre-training of Llama-3-8B, trained on 60GB+ of deduplicated publicly available text; supports Korean and English.
- Author: beomi · License: Other · Downloads: 6,014 · Likes: 60
- Tags: Large Language Model, Transformers, Supports Multiple Languages
## Llama 3 Open Ko 8B
A Korean language model continually pre-trained from Llama-3-8B on over 60GB of deduplicated publicly available text; supports Korean and English text generation.
- Author: beomi · License: Other · Downloads: 6,729 · Likes: 146
- Tags: Large Language Model, Transformers, Supports Multiple Languages
## K2S3 SOLAR 11b V2.0
A Korean large language model fine-tuned from SOLAR-10.7B-v1.0, specializing in Korean understanding and generation tasks.
- Author: Changgil · Downloads: 16 · Likes: 1
- Tags: Large Language Model, Transformers, Korean
## EEVE Korean Instruct 2.8B V1.0
A Korean instruction-following model fine-tuned from EEVE-Korean-2.8B-v1.0 and optimized with DPO training.
- Author: yanolja · License: Apache-2.0 · Downloads: 2,197 · Likes: 24
- Tags: Large Language Model, Transformers, Other
## EEVE Korean 10.8B V1.0
A Korean large language model extended from SOLAR-10.7B-v1.0, optimized for Korean understanding through vocabulary expansion and parameter-frozen training.
- Author: yanolja · License: Apache-2.0 · Downloads: 6,117 · Likes: 83
- Tags: Large Language Model, Transformers
## Kosolar 10.7B V0.2
A Korean vocabulary-expanded version of upstage/SOLAR-10.7B-v1.0, fine-tuned on Korean web-crawled datasets.
- Author: yanolja · License: Apache-2.0 · Downloads: 21 · Likes: 33
- Tags: Large Language Model, Transformers
## OPEN SOLAR KO 10.7B
A Korean-enhanced version of SOLAR-10.7B-v1.0, continually pre-trained with an expanded vocabulary and an enlarged Korean corpus.
- Author: beomi · License: Apache-2.0 · Downloads: 1,151 · Likes: 63
- Tags: Large Language Model, Transformers, Supports Multiple Languages
## Synatra 7B V0.3 RP GGUF
A 7B-parameter Korean large language model based on the Mistral architecture, specializing in role-playing and Korean text generation tasks.
- Author: TheBloke · Downloads: 3,953 · Likes: 14
- Tags: Large Language Model, Korean
## Koreanlm 3B
KoreanLM is an open-source project dedicated to developing Korean language models, aiming to address the scarcity of Korean training resources and the inefficient tokenization of Korean text.
- Author: quantumaikr · Downloads: 33 · Likes: 2
- Tags: Large Language Model, Transformers, Supports Multiple Languages
## Llama 2 70b Fb16 Korean
A Llama 2 70B model fine-tuned on Korean datasets, focusing on Korean and English text generation tasks.
- Author: quantumaikr · Downloads: 127 · Likes: 37
- Tags: Large Language Model, Transformers, Supports Multiple Languages
## Kollama2 7b
KoLlama2 is an open-source project that aims to improve the Korean performance of the English-centric Llama 2 and give Korean users a better language-interaction experience.
- Author: psymon · Downloads: 109 · Likes: 23
- Tags: Large Language Model, Transformers, Supports Multiple Languages
## Kullm Polyglot 12.8b V2
KULLM v2, a Korean large language model fine-tuned from EleutherAI/polyglot-ko-12.8b.
- Author: nlpai-lab · License: Apache-2.0 · Downloads: 1,901 · Likes: 52
- Tags: Large Language Model, Transformers, Korean
## Koreanlm
KoreanLM is an open-source language model project optimized for Korean, designed around Korean grammar and vocabulary characteristics and providing efficient tokenization.
- Author: quantumaikr · Downloads: 59 · Likes: 28
- Tags: Large Language Model, Transformers, Supports Multiple Languages
## Pko T5 Base
pko-t5 is a T5 model optimized for Korean, trained exclusively on Korean data and using BBPE tokenization to address Korean segmentation issues.
- Author: paust · Downloads: 874 · Likes: 19
- Tags: Large Language Model, Transformers, Korean
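The segmentation issue that byte-level BPE (BBPE) sidesteps is visible at the byte level: every Hangul syllable occupies three bytes in UTF-8, so a tokenizer whose base alphabet is the 256 possible byte values can represent any Korean string with no out-of-vocabulary failures. A minimal plain-Python sketch of the idea (not the actual pko-t5 tokenizer):

```python
# Hangul syllables are 3 bytes each in UTF-8, so character-level
# vocabularies for Korean are huge, while a byte-level base alphabet
# needs only 256 symbols and can never hit an unknown token.
text = "한국어"  # "Korean (the language)"

raw = text.encode("utf-8")
print(len(text), len(raw))  # 3 characters, 9 bytes

# A byte-level tokenizer's base vocabulary covers every possible byte;
# BPE then merges frequent byte sequences into larger units.
base_vocab = {bytes([i]): i for i in range(256)}
token_ids = [base_vocab[bytes([b])] for b in raw]
print(len(token_ids))  # 9 base tokens before any merges
```

Real BBPE tokenizers learn merge rules on top of this byte alphabet, so common Korean syllables and words end up as single tokens rather than nine-byte sequences.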
## KR ELECTRA Generator
A Korean-specific ELECTRA model developed by Seoul National University, excelling at informal-text processing tasks.
- Author: snunlp · Downloads: 42.01k · Likes: 1
- Tags: Large Language Model, Transformers, Korean